Recent years have witnessed remarkable progress in the development of deep learning models for digital pathology (DP) applications, evidenced by their increasingly widespread deployment in both research and clinical settings. Although such models have shown unprecedented performance in solving fundamental computational tasks in DP, they suffer from catastrophic forgetting when adapted to unseen data via transfer learning. With a growing need for deep learning models to handle ever-shifting data distributions, including evolving patient populations and new diagnostic assays, continual learning (CL) models that alleviate forgetting need to be introduced into DP-based analysis. However, to our knowledge, there has been no systematic study of such models for DP-specific applications. Here, we propose CL scenarios in DP settings, where histopathology image data from different sources/distributions arrive sequentially and their knowledge is integrated into a single model without training on all the data from scratch. We then establish an augmented dataset for colorectal cancer H&E classification to simulate shifts in image appearance, and evaluate CL model performance under the proposed scenarios. We also leverage a breast tumor H&E dataset alongside the colorectal cancer one to evaluate CL across different tumor types. In addition, we evaluate CL methods in an online few-shot setting under constraints on annotation and computational resources. Our results on CL in DP applications are promising and may pave the way for the adoption of these methods in clinical practice.
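A minimal sketch of the kind of scenario described above, assuming a generic PyTorch image classifier: data from different sources arrives sequentially, and a small replay buffer is one common way to mitigate catastrophic forgetting. The buffer size and training loop are illustrative assumptions, not the specific CL methods the paper evaluates.

```python
import random
import torch
import torch.nn as nn

def train_sequential(model, tasks, buffer_size=200, lr=1e-4):
    """tasks: list of lists of (image_tensor, label) pairs, one list per source."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    replay_buffer = []  # holds a few samples from past distributions
    for task in tasks:
        for x, y in task:
            # mix the current sample with a handful of replayed past samples
            batch = [(x, y)] + random.sample(replay_buffer,
                                             min(4, len(replay_buffer)))
            xs = torch.stack([b[0] for b in batch])
            ys = torch.tensor([b[1] for b in batch])
            optimizer.zero_grad()
            loss_fn(model(xs), ys).backward()
            optimizer.step()
        # keep a small random subset of this task for future replay
        replay_buffer.extend(random.sample(task, min(50, len(task))))
        replay_buffer = replay_buffer[-buffer_size:]
    return model
```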
This paper proposes a novel self-supervised Cut-and-Paste GAN that performs foreground object segmentation and generates realistic composite images without manual annotations. We accomplish this with a simple yet effective self-supervised approach coupled with a U-Net based discriminator. The proposed method extends the ability of standard discriminators to learn not only global data representations via real/fake classification but also semantic and structural information through pseudo labels created by the self-supervised task. It empowers the generator to create meaningful masks by forcing it to learn both informative per-pixel and global image feedback from the discriminator. Our experiments demonstrate that the proposed method significantly outperforms state-of-the-art methods on standard benchmark datasets.
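As a rough illustration of a discriminator that provides both a global real/fake score and per-pixel feedback, here is a tiny U-Net style sketch in PyTorch. The layer sizes and heads are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class UNetDiscriminator(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, ch, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.LeakyReLU(0.2))
        self.global_head = nn.Linear(ch * 2, 1)  # image-level real/fake score
        self.dec1 = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.LeakyReLU(0.2))
        self.pixel_head = nn.ConvTranspose2d(ch * 2, 1, 4, 2, 1)  # per-pixel map

    def forward(self, x):
        e1 = self.enc1(x)                                   # (B, ch, H/2, W/2)
        e2 = self.enc2(e1)                                  # (B, 2ch, H/4, W/4)
        g = self.global_head(e2.mean(dim=(2, 3)))           # (B, 1) global score
        d1 = self.dec1(e2)                                  # (B, ch, H/2, W/2)
        p = self.pixel_head(torch.cat([d1, e1], dim=1))     # (B, 1, H, W)
        return g, p

img = torch.randn(2, 3, 64, 64)
score, pixel_map = UNetDiscriminator()(img)  # shapes (2, 1) and (2, 1, 64, 64)
```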
Fuzzy logic has been proposed in previous studies of machine diagnosis to overcome various drawbacks of traditional diagnostic approaches. Among these approaches, the Failure Mode, Effects, and Criticality Analysis (FMECA) method attempts to identify potential failure modes and treat failures before they occur, based on subjective expert judgments. Several versions of fuzzy logic have been used to improve FMECA or to replace it, since FMECA is extremely cost-intensive in terms of failure modes: it evaluates each of them separately. However, these proposals have neither explicitly addressed the combinatorial complexity nor justified the choice of membership functions in fuzzy logic modeling. In this context, we develop an optimization-based approach, referred to as the Integrated Truth Table and Fuzzy Logic Model (ITTFLM), that smartly generates fuzzy logic rules using truth tables. The ITTFLM was tested on fan data collected in real time from a plant machine. In the experiments, three types of membership functions (triangular, trapezoidal, and Gaussian) were used. The ITTFLM can generate outputs in 5 ms; the results demonstrate that the model based on trapezoidal membership functions identifies failure states with high accuracy and can deal with large numbers of rules, thus meeting the real-time constraints that usually affect user experience.
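The two ingredients the abstract combines can be sketched briefly: membership functions map a raw measurement to a degree of membership, and enumerating a truth table generates the rule base instead of hand-writing each failure mode separately. The symptom names, thresholds, and consequent rule below are illustrative assumptions, not the paper's actual rule base.

```python
import itertools

def triangular(x, a, b, c):
    """Triangular membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    """Trapezoidal membership: flat top between b and c."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Enumerate a truth table over binary symptom inputs to generate rules
# systematically, rather than evaluating every failure mode by hand.
inputs = ["high_vibration", "high_temperature", "low_speed"]
rules = {}
for row in itertools.product([0, 1], repeat=len(inputs)):
    # Illustrative consequent: fail if at least two symptoms are active.
    rules[row] = "failure" if sum(row) >= 2 else "normal"

print(triangular(5.5, 5, 6, 8))       # 0.5 (halfway up the rising edge)
print(trapezoidal(7.5, 5, 6, 8, 9))   # 1.0 (inside the flat top)
print(rules[(1, 1, 0)])               # failure
```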
Transformer-based language models have been shown to be highly effective for several NLP tasks. In this paper, we consider three transformer models, BERT, RoBERTa, and XLNet, in both their small and large versions, and investigate how faithful their representations are with respect to the semantic content of texts. We formalize a notion of semantic faithfulness, in which the semantic content of a text should causally figure in a model's inferences in question answering. We then test this notion by observing a model's behavior when answering questions about a story after performing two novel semantic interventions: deletion intervention and negation intervention. While transformer models achieve high performance on standard question answering tasks, we show that they fail to be semantically faithful once we perform these interventions, in a significant number of cases (~50% for deletion intervention, and a ~20% drop in accuracy for negation intervention). We then propose an intervention-based training regime that mitigates the undesirable effects of deletion intervention by a significant margin (from ~50% to ~6%). We analyze the inner workings of the models to better understand why intervention-based training is effective for deletion intervention. However, we show that this training does not attenuate other aspects of semantic unfaithfulness, such as the models' inability to deal with negation intervention or to capture the predicate-argument structure of texts. We also test InstructGPT, via prompting, for its ability to handle the two interventions and to capture predicate-argument structure. While InstructGPT models achieve very high performance on the predicate-argument structure task, they fail to respond adequately to our deletion and negation interventions.
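A rough sketch of the two interventions on a toy story follows; the real experiments use curated QA data, and the negation rule here is a naive stand-in for that curation. A semantically faithful model should change its answer after either intervention, which the abstract reports transformer QA models frequently fail to do.

```python
def deletion_intervention(story_sentences, answer_sentence_idx):
    """Remove the sentence that supports the gold answer."""
    return [s for i, s in enumerate(story_sentences) if i != answer_sentence_idx]

def negation_intervention(sentence):
    """Negate the supporting sentence (toy heuristic; real data is curated)."""
    for verb in (" is ", " was ", " are ", " were "):
        if verb in sentence:
            return sentence.replace(verb, verb.rstrip() + " not ", 1)
    return "It is not the case that " + sentence[0].lower() + sentence[1:]

story = ["The key was on the table.", "Mary took the key."]
print(deletion_intervention(story, 0))  # drops the supporting sentence
print(negation_intervention(story[0]))  # "The key was not on the table."
```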
Large pre-trained language models have recently enabled open-ended generation frameworks (e.g., prompt-to-text NLG) to tackle a variety of tasks going beyond traditional data-to-text generation. While this framework is more general, it is under-specified and often leads to a lack of controllability, restricting real-world usage. We propose a new grounded keys-to-text generation task: generate a factual description of an entity given a set of guiding keys and grounding passages. To address this task, we introduce a new dataset, called EntDeGen. Inspired by recent QA-based evaluation measures, we propose an automatic metric, MAFE, for the factual correctness of generated descriptions. Our EntDescriptor model is equipped with strong rankers to fetch helpful passages and generate entity descriptions. Experimental results show a good correlation (60.14) between our proposed metric and human judgments of factuality. Our rankers significantly improve the factual correctness of generated descriptions (15.95% and 34.51% relative gains in recall and precision). Finally, our ablation study highlights the benefit of combining keys and groundings.
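To make the task shape concrete, here is an illustrative input record and a naive key-coverage check in the spirit of a QA-based factuality metric. The actual MAFE metric is more involved, and every field name and value below is an assumption for illustration.

```python
example = {
    "entity": "Marie Curie",
    "keys": ["born 1867", "physicist", "Nobel Prize"],
    "passages": [
        "Marie Curie (born 1867) was a physicist and chemist.",
        "She was the first person to win two Nobel Prizes.",
    ],
}

def naive_factuality(description, keys, passages):
    """Fraction of keys that the description mentions and the passages support."""
    grounded = " ".join(passages).lower()
    hits = [k for k in keys
            if k.lower() in description.lower()
            and all(tok in grounded for tok in k.lower().split())]
    return len(hits) / len(keys)

desc = "Marie Curie was a physicist born 1867 who won the Nobel Prize."
print(naive_factuality(desc, example["keys"], example["passages"]))  # 1.0
```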
Narrative summarization aims to produce a distilled version of a narrative to describe its most salient events and characters. Summarizing a narrative is challenging as it requires an understanding of event causality and character behaviors. To encourage research in this direction, we propose NarraSum, a large-scale narrative summarization dataset. It contains 122K narrative documents, which are collected from plot descriptions of movies and TV episodes with diverse genres, and their corresponding abstractive summaries. Experiments show that there is a large performance gap between humans and the state-of-the-art summarization models on NarraSum. We hope that this dataset will promote future research in summarization, as well as broader studies of natural language understanding and generation. The dataset is available at https://github.com/zhaochaocs/narrasum.
Many recent perturbation studies have found unintuitive results about what does and does not matter when performing Natural Language Understanding (NLU) tasks in English. Coding properties, such as word order, can often be removed through shuffling without impacting downstream performance. Such insights may be used to direct future research into English NLP models. As many improvements in multilingual settings consist of wholesale adaptation of English approaches, it is important to verify whether those studies replicate in multilingual settings. In this work, we replicate a study of the importance of local structure, and the relative unimportance of global structure, in a multilingual setting. We find that the phenomenon observed in English broadly translates to over 120 languages, with a few caveats.
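The kind of perturbation at issue can be sketched as two shuffles: one that scrambles word order only inside small local windows and one that scrambles the whole sentence. The window size and whitespace tokenization are illustrative choices, not the study's exact protocol.

```python
import random

def shuffle_local(tokens, window=3, seed=0):
    """Destroy local word order while words stay near their original region."""
    rng = random.Random(seed)
    out = []
    for i in range(0, len(tokens), window):
        chunk = tokens[i:i + window]
        rng.shuffle(chunk)
        out.extend(chunk)
    return out

def shuffle_global(tokens, seed=0):
    """Destroy word order across the entire sentence."""
    rng = random.Random(seed)
    out = tokens[:]
    rng.shuffle(out)
    return out

sent = "the quick brown fox jumps over the lazy dog".split()
print(" ".join(shuffle_local(sent)))   # scrambled only within windows
print(" ".join(shuffle_global(sent)))  # scrambled everywhere
```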
Providing better language tools for low-resource and endangered languages is imperative for equitable growth. Recent progress with massively multilingual pretrained models has proven surprisingly effective at zero-shot transfer to a wide variety of languages. However, this transfer is not universal: many languages are not currently understood by multilingual approaches. It is estimated that only 72 languages possess a "small set of labeled datasets" on which a model's performance could be tested; the vast majority of languages lack even the resources needed to evaluate performance. In this work, we attempt to clarify which languages do and do not currently benefit from such transfer. To that end, we develop a general approach that requires only unlabelled text to detect which languages are not well understood by a cross-lingual model. Our approach derives from the hypothesis that if a model's understanding is insensitive to perturbations of text in a language, it likely has a limited understanding of that language. We construct a cross-lingual sentence similarity task to evaluate our approach empirically on 350, primarily low-resource, languages.
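A minimal sketch of that hypothesis as a probe, assuming some multilingual sentence encoder `embed` (a placeholder; a toy stand-in is included so the snippet runs): if the representation barely moves when the text is shuffled, the model is likely insensitive to, and so has a limited grasp of, that language.

```python
import random
import numpy as np

def perturb(sentence, seed=0):
    toks = sentence.split()
    random.Random(seed).shuffle(toks)
    return " ".join(toks)

def sensitivity(embed, sentences):
    """Mean drop in cosine similarity between each sentence and its perturbation."""
    drops = []
    for s in sentences:
        a, b = embed(s), embed(perturb(s))
        cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        drops.append(1.0 - cos)  # how much the representation moved
    return float(np.mean(drops))  # near 0 => model insensitive to this language

# Toy encoder for demonstration only: bag of hashed character trigrams.
def toy_embed(text):
    vec = np.zeros(512)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % 512] += 1
    return vec

print(sensitivity(toy_embed, ["the quick brown fox jumps over the lazy dog"]))
```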
A multiword expression (MWE) is a sequence of words whose collective meaning is not derived from its individual words. Processing MWEs is crucial in many natural language processing (NLP) applications, including machine translation and terminology extraction. Detecting MWEs in different domains is therefore an important research topic. In this paper, we explore state-of-the-art neural transformers for detecting MWEs in flower and plant names. We evaluate different transformer models on a dataset created from an encyclopedia of plants and flowers. We empirically show that transformer models outperform previous neural models based on long short-term memory (LSTM).
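MWE detection is commonly framed as BIO token classification with a transformer; a minimal sketch using Hugging Face Transformers follows. The checkpoint, label set, and example sentence are illustrative assumptions, not the paper's exact configuration, and the classification head is untrained until fine-tuned on MWE-annotated data.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["O", "B-MWE", "I-MWE"]  # e.g. "bird of paradise" -> B-MWE I-MWE I-MWE
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels))

enc = tokenizer("The bird of paradise bloomed early.", return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits  # (1, seq_len, 3); random until fine-tuned
pred = [labels[i] for i in logits.argmax(-1)[0].tolist()]
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
print(list(zip(tokens, pred)))
```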
Opinion summarization is the task of creating summaries that capture the popular opinions in user reviews. In this paper, we introduce Geodesic Summarizer (GeoSumm), a novel system that performs unsupervised extractive opinion summarization. GeoSumm involves an encoder-decoder based representation model that represents text as a distribution over latent semantic units. GeoSumm generates these representations by performing dictionary learning over pre-trained text representations at multiple decoder layers. We then use these representations to quantify the relevance of review sentences with a novel geodesic distance based scoring mechanism. We use the relevance scores to identify popular opinions and compose general as well as aspect-specific summaries. Our proposed model, GeoSumm, achieves state-of-the-art performance on three opinion summarization datasets. We perform additional experiments to analyze the model and showcase its generalization ability across different domains.
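A toy sketch of the scoring idea: treat each review sentence as a distribution over latent semantic units and score its relevance by closeness to the corpus-level distribution. The Fisher-Rao distance below is one genuine geodesic distance between probability distributions, used here as an illustrative stand-in for the paper's actual machinery; the distributions themselves are made up.

```python
import numpy as np

def relevance_scores(sentence_dists):
    """sentence_dists: (n_sentences, n_units) array whose rows sum to 1."""
    mean = sentence_dists.mean(axis=0)
    mean /= mean.sum()
    # Fisher-Rao geodesic distance: d(p, q) = 2 * arccos(sum_i sqrt(p_i * q_i))
    bc = np.clip(np.sqrt(sentence_dists * mean).sum(axis=1), 0.0, 1.0)
    dists = 2 * np.arccos(bc)
    return -dists  # higher score = closer to the popular opinion

dists = np.array([[0.7, 0.2, 0.1],   # sentence about the dominant aspect
                  [0.6, 0.3, 0.1],
                  [0.1, 0.1, 0.8]])  # outlier opinion
print(relevance_scores(dists))        # the outlier gets the lowest score
```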